
    Rate and distortion redundancies for universal source coding with respect to a fidelity criterion

    Rissanen has shown that there exist universal noiseless codes for {Xi} with per-letter rate redundancy as low as (K log N)/2N, where N is the blocklength and K is the number of source parameters. We derive an analogous result for universal source coding with respect to the squared error fidelity criterion: there exist codes with per-letter rate redundancy as low as (K log N)/2N and per-letter distortion (averaged over X^N and θ) at most D(R)[1 + K/N], where D(R) is an average distortion-rate function and K is now the number of parameters in the code.
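    As a rough numeric illustration (not taken from the paper), the sketch below plugs example values of K, N, and D(R) into the two bounds quoted above; the function and variable names are hypothetical, and the log base is assumed to be 2.
```python
import math

def redundancy_bounds(K, N, D_R):
    """Evaluate the quoted bounds: per-letter rate redundancy (K log N)/(2N)
    and average per-letter distortion at most D(R) * (1 + K/N)."""
    rate_redundancy = K * math.log2(N) / (2 * N)   # bits per letter (base-2 assumed)
    distortion_bound = D_R * (1 + K / N)
    return rate_redundancy, distortion_bound

# Example: K = 4 parameters, blocklength N = 1024, D(R) = 0.1
r, d = redundancy_bounds(K=4, N=1024, D_R=0.1)
print(f"rate redundancy <= {r:.5f} bits/letter, distortion <= {d:.5f}")
```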

    Weighted universal transform coding: universal image compression with the Karhunen-Loève transform

    We introduce a two-stage universal transform code for image compression. The code combines Karhunen-Loève transform coding with weighted universal bit allocation (WUBA) in a two-stage algorithm analogous to the algorithm for weighted universal vector quantization (WUVQ). The encoder uses a collection of transform/bit allocation pairs rather than a single transform/bit allocation pair (as in JPEG) or a single transform with a variety of bit allocations (as in WUBA). We describe both an encoding algorithm for achieving optimal compression using a collection of transform/bit allocation pairs and a technique for designing locally optimal collections of transform/bit allocation pairs. We demonstrate the performance using the mean squared error distortion measure. On a sequence of combined text and gray scale images, the algorithm achieves up to a 2 dB improvement over a JPEG-style coder using the discrete cosine transform (DCT) and an optimal collection of bit allocations, up to a 3 dB improvement over a JPEG-style coder using the DCT and a single (optimal) bit allocation, up to a 6 dB improvement over an entropy-constrained WUVQ with first- and second-stage vector dimensions equal to 16 and 4, respectively, and up to a 10 dB improvement over an entropy-constrained vector quantizer (ECVQ) with a vector dimension of 4.
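    A minimal sketch of the first-stage selection idea under squared-error distortion: for each block, try every transform/bit-allocation pair and keep the one with the lowest Lagrangian rate-distortion cost. The uniform quantizers, orthonormal transforms, and cost weighting here are illustrative assumptions, not the paper's actual design.
```python
import numpy as np

def encode_block(block, pairs, lam=0.1):
    """Pick the transform/bit-allocation pair minimizing D + lam * R for one block.
    `pairs` is a list of (transform_matrix, bits_per_coefficient) tuples."""
    best = None
    x = block.flatten()
    for idx, (T, bits) in enumerate(pairs):
        coeffs = T @ x                              # forward transform
        steps = 2.0 ** (-bits)                      # uniform quantizer step per coefficient
        q = np.round(coeffs / steps)                # quantize
        rec = T.T @ (q * steps)                     # inverse transform (T assumed orthonormal)
        dist = np.sum((x - rec) ** 2)               # squared-error distortion
        rate = np.sum(bits)                         # bits spent on this block (allocation cost)
        cost = dist + lam * rate
        if best is None or cost < best[0]:
            best = (cost, idx, q)
    _, idx, q = best
    return idx, q                                   # first-stage index + quantized coefficients

# Toy usage: two candidate pairs on 4x4 blocks (identity vs. a random orthonormal transform).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
pairs = [(np.eye(16), np.full(16, 4.0)), (Q, np.full(16, 4.0))]
idx, q = encode_block(rng.standard_normal((4, 4)), pairs)
print("chosen pair:", idx)
```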

    A vector quantization approach to universal noiseless coding and quantization

    A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which the first-stage code can be regarded as a vector quantizer that “quantizes” the input data of length n to one of a fixed collection of block codes. We apply the generalized Lloyd algorithm to the first-stage quantizer, using induced measures of rate and distortion, to design locally optimal two-stage codes. On a source of medical images, two-stage variable-rate vector quantizers designed in this way outperform standard (one-stage) fixed-rate vector quantizers by over 9 dB. The tail of the operational distortion-rate function of the first-stage quantizer determines the optimal rate of convergence of the redundancy of a universal sequence of two-stage codes. We show that there exist two-stage universal noiseless codes, fixed-rate quantizers, and variable-rate quantizers whose per-letter rate and distortion redundancies converge to zero as (k/2)n^(-1) log n when the universe of sources has finite dimension k. This extends the achievability part of Rissanen's theorem from universal noiseless codes to universal quantizers. Further, we show that the redundancies converge as O(n^(-1)) when the universe of sources is countable, and as O(n^(-1+ϵ)) when the universe of sources is infinite-dimensional, under appropriate conditions.
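    A minimal sketch of the generalized-Lloyd-style design loop described above, with plain full-search vector quantizers standing in for the second-stage codes and distortion alone driving the first-stage assignment (the induced rate terms are omitted); all names and simplifications are assumptions, not the paper's algorithm.
```python
import numpy as np

def design_two_stage(blocks, n_codes=4, codebook_size=8, iters=10, seed=0):
    """Generalized-Lloyd-style design of a two-stage code: alternate between
    (a) assigning each training block to the second-stage codebook that encodes
    it with least distortion, and (b) re-designing each codebook (here by a
    k-means-style centroid update) on the blocks assigned to it."""
    rng = np.random.default_rng(seed)
    # Initialize each second-stage codebook from randomly chosen training blocks.
    codebooks = [blocks[rng.choice(len(blocks), codebook_size)] for _ in range(n_codes)]

    def block_distortion(b, cb):
        return np.min(np.sum((cb - b) ** 2, axis=1))   # nearest-codeword squared error

    for _ in range(iters):
        # (a) First-stage "quantization": map each block to its best codebook.
        assign = np.array([np.argmin([block_distortion(b, cb) for cb in codebooks])
                           for b in blocks])
        # (b) Re-design each codebook on the blocks assigned to it.
        for c in range(n_codes):
            sub = blocks[assign == c]
            if len(sub) == 0:
                continue
            labels = np.argmin(((sub[:, None, :] - codebooks[c][None, :, :]) ** 2).sum(-1), axis=1)
            for k in range(codebook_size):
                if np.any(labels == k):
                    codebooks[c][k] = sub[labels == k].mean(axis=0)
    return codebooks

# Toy usage: 200 four-dimensional training vectors.
rng = np.random.default_rng(1)
cbs = design_two_stage(rng.standard_normal((200, 4)))
print(len(cbs), "second-stage codebooks designed")
```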

    Weighted universal image compression

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
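    A small sketch of the rate accounting common to these two-stage codes: the first-stage index identifying which code in the family was used adds side-information rate on top of the chosen code's own rate. The fixed-length first-stage index and the example numbers are assumptions for illustration only.
```python
import math

def two_stage_rate(n_family_codes, block_dim, chosen_code_bits):
    """Per-sample rate of a two-stage code: the cost of the first-stage index
    (side information, assumed fixed-length here) plus the bits spent by the
    chosen second-stage code, both amortized over the block."""
    side_info = math.log2(n_family_codes) / block_dim
    return side_info + chosen_code_bits / block_dim

# Example: 64 candidate codes, 16-sample blocks, chosen code spends 32 bits on the block.
print(f"{two_stage_rate(64, 16, 32):.3f} bits/sample")
```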

    A mean-removed variation of weighted universal vector quantization for image coding

    Weighted universal vector quantization uses traditional codeword design techniques to design locally optimal multi-codebook systems. Application of this technique to a sequence of medical images produces a 10.3 dB improvement over standard full-search vector quantization followed by entropy coding, at the cost of increased complexity. In this proposed variation, each codebook in the system is given a mean or 'prediction' value which is subtracted from all supervectors that map to the given codebook. The chosen codebook's codewords are then used to encode the resulting residuals. Application of the mean-removed system to the medical data set achieves up to a 0.5 dB improvement at no rate expense.
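    A minimal sketch of the mean-removed selection step, assuming squared-error distortion and exhaustive search over the codebooks; the stored per-codebook means and all names are illustrative rather than the paper's implementation.
```python
import numpy as np

def mean_removed_encode(supervector, codebooks, means):
    """Mean-removed two-stage encoding sketch: for each candidate codebook,
    subtract that codebook's stored mean ('prediction') from the supervector,
    encode the residual with the codebook's nearest codeword, and keep the
    codebook giving the lowest squared error."""
    best = None
    for idx, (cb, mu) in enumerate(zip(codebooks, means)):
        residual = supervector - mu
        word = np.argmin(np.sum((cb - residual) ** 2, axis=1))
        err = np.sum((cb[word] - residual) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, word)
    _, idx, word = best
    return idx, word          # codebook index + codeword index

# Toy usage: four codebooks of eight 16-dimensional codewords each.
rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((8, 16)) for _ in range(4)]
means = [rng.standard_normal(16) * 5 for _ in range(4)]
print(mean_removed_encode(rng.standard_normal(16), codebooks, means))
```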

    A Progressive Universal Noiseless Coder

    The authors combine pruned tree-structured vector quantization (pruned TSVQ) with Itoh's (1987) universal noiseless coder. By combining pruned TSVQ with universal noiseless coding, they benefit from the “successive approximation” capabilities of TSVQ, thereby allowing progressive transmission of images, while retaining the ability to noiselessly encode images of unknown statistics in a provably asymptotically optimal fashion. Noiseless compression results are comparable to Ziv-Lempel and arithmetic coding for both images and finely quantized Gaussian sources.
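    A minimal sketch of the progressive ("successive approximation") behaviour of TSVQ: each step down the tree emits one bit, so any prefix of the bit string decodes to a coarser reconstruction. The tiny hand-built tree is illustrative, and the universal noiseless coding of the emitted bits is not shown.
```python
import numpy as np

class TSVQNode:
    """Binary tree-structured VQ node: a centroid plus optional children."""
    def __init__(self, centroid, left=None, right=None):
        self.centroid, self.left, self.right = centroid, left, right

def tsvq_encode(x, root):
    """Descend the tree greedily; each step emits one bit."""
    bits, node = [], root
    while node.left is not None and node.right is not None:
        go_right = (np.sum((x - node.right.centroid) ** 2)
                    < np.sum((x - node.left.centroid) ** 2))
        bits.append(1 if go_right else 0)
        node = node.right if go_right else node.left
    return bits

def tsvq_decode(bits, root):
    """Reconstruct from however many bits have arrived so far (progressive)."""
    node = root
    for b in bits:
        child = node.right if b else node.left
        if child is None:
            break
        node = child
    return node.centroid

# Toy two-level tree over scalars.
root = TSVQNode(np.array([0.0]),
                TSVQNode(np.array([-1.0]), TSVQNode(np.array([-1.5])), TSVQNode(np.array([-0.5]))),
                TSVQNode(np.array([1.0]), TSVQNode(np.array([0.5])), TSVQNode(np.array([1.5]))))
bits = tsvq_encode(np.array([0.7]), root)
print(bits, tsvq_decode(bits[:1], root), tsvq_decode(bits, root))
```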

    Precision Enhancement of 3D Surfaces from Multiple Compressed Depth Maps

    In texture-plus-depth representation of a 3D scene, depth maps from different camera viewpoints are typically lossily compressed via the classical transform coding / coefficient quantization paradigm. In this paper we propose to reduce the distortion of the decoded depth maps due to quantization. The key observation is that depth maps from different viewpoints constitute multiple descriptions (MD) of the same 3D scene. Considering the MD jointly, we perform a POCS-like iterative procedure to project a reconstructed signal from one depth map to the other and back, so that the converged depth maps have higher precision than the original quantized versions.
    Comment: This work was accepted as an ongoing-work paper in IEEE MMSP'201
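    A minimal sketch of the POCS-like refinement on a toy 1-D signal, assuming the two depth maps are already aligned pixel-for-pixel (the paper's view-to-view projection is not modelled): the estimate is alternately projected onto the quantization-bin constraint set implied by each description.
```python
import numpy as np

def pocs_refine(q1, q2, step1, step2, iters=20):
    """POCS-style refinement of two quantized descriptions of the same signal:
    alternately project the current estimate onto the constraint set of each
    description (the quantization bin that produced it)."""
    def project(x, q, step):
        lo, hi = (q - 0.5) * step, (q + 0.5) * step    # bin implied by the quantized value
        return np.clip(x, lo, hi)
    est = (q1 * step1 + q2 * step2) / 2.0              # start between the two reconstructions
    for _ in range(iters):
        est = project(est, q1, step1)
        est = project(est, q2, step2)
    return est

# Toy usage: one true depth value seen through two different uniform quantizers.
true_depth = np.array([3.27])
step1, step2 = 0.5, 0.3
q1, q2 = np.round(true_depth / step1), np.round(true_depth / step2)
refined = pocs_refine(q1, q2, step1, step2)
print(q1 * step1, q2 * step2, refined)   # refined estimate lies in the intersection of both bins
```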